72 research outputs found

    Supervised learning with decision margins in pools of spiking neurons.

    Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adapting synaptic weights under the influence of a training signal, in order to group the spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such "supervised learning", using principles similar to those of the Support Vector Machine (SVM), a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can further increase performance by sharing the load among neurons.
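    The margin-based hinge-loss update described above can be sketched as a perceptron-like rule on summarized spike features. This is a minimal illustration, not the authors' exact formulation: the feature vectors, margin, and learning rate below are assumed for the example.

```python
import random

def hinge_train(patterns, labels, margin=1.0, lr=0.01, epochs=200, seed=0):
    """Margin-based (SVM-like) weight update on summarized spike features.

    Each pattern is a feature vector (e.g. per-synapse spike counts);
    labels are +1/-1. Weights change only when the decision margin is
    violated, mirroring the hinge loss max(0, margin - y * w.x).
    """
    rng = random.Random(seed)
    n = len(patterns[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score < margin:  # inside the margin: apply update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    """Sign of the weighted input decides the category."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```

    A pool of neurons, as in the abstract, could be modeled by training several such classifiers on different feature subsets and taking a majority vote.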

    Analytic continuation of residue currents

    Let $X$ be a complex manifold and $f\colon X\to \mathbb{C}^p$ a holomorphic mapping defining a complete intersection. We prove that the iterated Mellin transform of the residue integral associated to $f$ has an analytic continuation to a neighborhood of the origin in $\mathbb{C}^p$.

    Neo: an object model for handling electrophysiology data in multiple formats

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. 
    We intend that Neo should become the standard basis for Python tools in neurophysiology.
    Funding: EC/FP7/269921/EU (BrainScaleS: Brain-inspired multiscale computation in neuromorphic hybrid systems); DFG 103586207 (GRK 1589: Verarbeitung sensorischer Informationen in neuronalen Systemen); BMBF 01GQ1302 (Nationaler Neuroinformatik Knoten).
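    The container hierarchy the abstract describes (signals grouped into recording segments, segments into blocks, with IO kept separate from analysis) can be illustrated with a minimal sketch. The class and field names below are simplified stand-ins, not Neo's actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnalogSignal:
    """A regularly sampled signal; the real Neo objects also carry units."""
    samples: List[float]
    sampling_rate_hz: float

    @property
    def duration_s(self) -> float:
        return len(self.samples) / self.sampling_rate_hz

@dataclass
class Segment:
    """One recording epoch, grouping signals acquired together."""
    name: str
    analogsignals: List[AnalogSignal] = field(default_factory=list)

@dataclass
class Block:
    """Top-level container, e.g. one experimental session."""
    name: str
    segments: List[Segment] = field(default_factory=list)

# Analysis and visualization code targets this in-memory model,
# while separate IO modules would translate vendor file formats into it.
block = Block("session-1")
seg = Segment("trial-1")
seg.analogsignals.append(AnalogSignal(samples=[0.0] * 2000, sampling_rate_hz=1000.0))
block.segments.append(seg)
```

    Keeping the model free of analysis functions, as the abstract argues, is what lets many tools share it without inheriting each other's dependencies.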

    PyNN: A Common Interface for Neuronal Network Simulators

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN
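    The "write the simulation script once, choose the simulator at run time" pattern can be sketched as a common interface with pluggable backends. The backend classes here are toy stand-ins to show the shape of the idea, not PyNN's real simulator bindings.

```python
class Backend:
    """Common interface that every simulator backend implements."""
    def run(self, network, duration_ms):
        raise NotImplementedError

class ToySimulatorA(Backend):
    """Stand-in for one simulator (e.g. a NEURON-like backend)."""
    def run(self, network, duration_ms):
        return f"A simulated {network} for {duration_ms} ms"

class ToySimulatorB(Backend):
    """Stand-in for a second simulator with its own internals."""
    def run(self, network, duration_ms):
        return f"B simulated {network} for {duration_ms} ms"

BACKENDS = {"sim_a": ToySimulatorA, "sim_b": ToySimulatorB}

def simulate(network, duration_ms, backend_name):
    # Everything below this line is identical for every backend;
    # only the backend_name argument selects the simulator.
    backend = BACKENDS[backend_name]()
    return backend.run(network, duration_ms)
```

    Cross-checking results, as the abstract recommends, then amounts to calling `simulate` twice with different backend names and comparing the outputs.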

    Residue currents associated with weakly holomorphic functions

    We construct Coleff-Herrera products and Bochner-Martinelli type residue currents associated with a tuple $f$ of weakly holomorphic functions, and show that these currents satisfy basic properties from the (strongly) holomorphic case, such as the transformation law, the Poincaré-Lelong formula, and the equivalence of the Coleff-Herrera product and the Bochner-Martinelli type residue current associated with $f$ when $f$ defines a complete intersection.
    Comment: 28 pages. Updated with some corrections from the revision process. In particular, corrected and clarified some things in Sections 5 and 6 regarding products of weakly holomorphic functions and currents, and the definition of the Bochner-Martinelli type current.
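    For context, the classical (strongly) holomorphic Poincaré-Lelong formula that the abstract refers to reads, in one common normalization,

```latex
\frac{i}{2\pi}\,\partial\bar\partial \log|f|^2 \;=\; [Z_f],
```

    where $[Z_f]$ is the current of integration over the zero divisor of a holomorphic function $f$, counted with multiplicities; the paper extends identities of this kind to weakly holomorphic $f$.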

    Transfer Functions for Protein Signal Transduction: Application to a Model of Striatal Neural Plasticity

    We present a novel formulation for biochemical reaction networks in the context of signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of 'source' species, which receive input signals. Signals are transmitted to all other species in the system (the 'target' species) with a specific delay and transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and recalled to build discrete dynamical models. By separating reaction time and concentration we can greatly simplify the model, circumventing typical problems of complex dynamical systems. The transfer function transformation can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant insight, while remaining an executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modules that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. We also found that overall interconnectedness depends on the magnitude of input, with high connectivity at low input and less connectivity at moderate to high input. This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent signal transmission inactivation.
    Comment: 13 pages, 5 tables, 15 figures.
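    The delay/strength decomposition described above can be sketched on a toy one-species mass-action system, dx/dt = k_on·u − k_off·x. The rate constants, tolerance, and input value below are illustrative assumptions, not parameters from the striatal model.

```python
def transfer_function(u, k_on=1.0, k_off=0.5, dt=0.01, tol=0.01, t_max=100.0):
    """Toy input-output transfer function for dx/dt = k_on*u - k_off*x.

    Returns (delay, strength): the delay is the time for the target
    species to settle within `tol` of its stable equilibrium, and the
    strength is the resulting concentration change from the initial
    state x = 0. All parameters are illustrative.
    """
    x = 0.0
    x_eq = k_on * u / k_off              # stable equilibrium of the ODE
    t = 0.0
    while abs(x - x_eq) > tol and t < t_max:
        x += dt * (k_on * u - k_off * x)  # forward-Euler integration step
        t += dt
    return t, x                           # (delay, concentration change)
```

    A matrix of such (delay, strength) pairs over all source-target combinations could then be fitted and recalled to drive a discrete dynamical model, as the abstract describes.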

    Bupropion Increases Selection of High Effort Activity in Rats Tested on a Progressive Ratio/Chow Feeding Choice Procedure: Implications for Treatment of Effort-Related Motivational Symptoms

    Background: Depression and related disorders are characterized by deficits in behavioral activation, exertion of effort, and other psychomotor/motivational dysfunctions. Depressed patients show alterations in effort-related decision making and a bias towards selection of low-effort activities. It has been suggested that animal tests of effort-related decision making could be useful as models of motivational dysfunctions seen in psychopathology. Methods: Because clinical studies have suggested that inhibition of catecholamine uptake may be a useful strategy for treatment of effort-related motivational symptoms, the present research assessed the ability of bupropion to increase work output in rats responding on a test of effort-related decision making (i.e., a progressive ratio/chow feeding choice task). With this task, rats can choose between working for a preferred food (high-carbohydrate pellets) by lever pressing on a progressive ratio schedule vs. obtaining a less preferred laboratory chow that is freely available in the chamber. Results: Bupropion (10.0–40.0 mg/kg intraperitoneal) significantly increased all measures of progressive ratio lever pressing, but decreased chow intake. These effects were greatest in animals with low baseline levels of work output on the progressive ratio schedule. Because accumbens dopamine is implicated in effort-related processes, the effects of bupropion on markers of accumbens dopamine transmission were examined. Bupropion elevated extracellular dopamine levels in accumbens core as measured by microdialysis and increased phosphorylated dopamine- and cAMP-regulated phosphoprotein of 32 kDa (pDARPP-32) immunoreactivity in a manner consistent with D1 and D2 receptor stimulation. Conclusion: The ability of bupropion to increase exertion of effort in instrumental behavior may have implications for the pathophysiology and treatment of effort-related motivational symptoms in humans.

    Locomotion modulates specific functional cell types in the mouse visual thalamus

    The visual system is composed of diverse cell types that encode distinct aspects of the visual scene and may form separate processing channels. Here we present further evidence for that hypothesis, whereby functional cell groups in the dorsal lateral geniculate nucleus (dLGN) are differentially modulated during behavior. Using simultaneous multi-electrode recordings in dLGN and primary visual cortex (V1) of behaving mice, we characterized the impact of locomotor activity on response amplitude, variability, correlation and spatiotemporal tuning. Locomotion strongly impacts the amplitudes of dLGN and V1 responses, but the effects on variability and correlations are relatively minor. With regard to tuning, locomotion enhances dLGN responses to high temporal frequencies, preferentially affecting ON transient cells and neurons with nonlinear responses to high spatial frequencies. Channel-specific modulations may serve to highlight particular visual inputs during active behaviors.